In this work, we propose a novel approach for representation learning from a trained neural network. In particular, we form a Bregman divergence based on the layer's transfer function and construct an extension of the original Bregman PCA formulation by incorporating a mean vector and normalizing the principal directions with respect to the geometry of the local convex function around the mean. This generalization allows the learned representation to be exported as a fixed layer with a non-linearity. As an application to knowledge distillation, we cast the learning problem for the student network as predicting the compression coefficients of the teacher's representations, which are passed as the input to the imported layer. Our empirical findings indicate that our approach is more effective for transferring information between networks than typical teacher-student training using the teacher's penultimate-layer representations and soft labels.
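The Euclidean special case makes the construction concrete. Below is a minimal sketch assuming an identity transfer function, so the Bregman divergence reduces to the squared Euclidean distance and the formulation becomes ordinary PCA around the mean; the teacher representations, dimensions, and variable names are illustrative, and the paper's general non-linear layer export is not reproduced.

```python
# Minimal sketch (not the paper's implementation): with an identity transfer
# function the Bregman PCA construction reduces to ordinary PCA around the
# mean; the student would be trained to predict the compression coefficients.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical teacher penultimate-layer representations (n samples x d dims).
T = rng.normal(size=(1000, 64))

# Centre and take the top-k principal directions.
mean = T.mean(axis=0)
U, S, Vt = np.linalg.svd(T - mean, full_matrices=False)
k = 8
P = Vt[:k]                       # principal directions (k x d)

# Compression coefficients the student network would be trained to predict.
coeffs = (T - mean) @ P.T        # (n x k)

# The exported "imported layer" maps predicted coefficients back to the
# teacher's representation space (linear here; a non-linearity would follow
# for a general transfer function).
reconstructed = coeffs @ P + mean
print("relative reconstruction error:",
      np.linalg.norm(T - reconstructed) / np.linalg.norm(T))
```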
In linear regression, we wish to estimate the optimal least-squares predictor for a distribution over $d$-dimensional input points and real-valued responses, based on a small sample. Under standard random design analysis, where the sample is drawn i.i.d. from the input distribution, the least-squares solution for that sample can be viewed as the natural estimator of the optimum. Unfortunately, this estimator almost always incurs an undesirable bias coming from the randomness of the input points, which is a significant bottleneck in model averaging. In this paper, we show that it is possible to draw a non-i.i.d. sample of input points such that, regardless of the response model, the least-squares solution is an unbiased estimator of the optimum. Moreover, this sample can be produced efficiently by augmenting a previously drawn i.i.d. sample with an additional set of $d$ points drawn jointly according to a certain determinantal point process constructed from the input distribution rescaled by the squared volume spanned by the points. Motivated by this, we develop a theoretical framework for studying volume-rescaled sampling, and in the process prove a number of new matrix expectation identities. We use them to show that, for any input distribution and $\epsilon > 0$, there is a random design consisting of $O(d \log d + d/\epsilon)$ points from which an unbiased estimator can be constructed whose expected squared loss over the entire distribution is bounded by $1+\epsilon$ times the loss of the optimum. We provide efficient algorithms for generating such unbiased estimators in a number of practical settings and support our claims experimentally.
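As a toy illustration of the sampling rule, the sketch below draws the $d$ extra points from a small finite candidate pool with probability proportional to the squared volume they span, using brute-force enumeration; the paper's efficient determinantal point process samplers and the exact unbiasedness guarantee for continuous input distributions are not reproduced here, and all sizes are illustrative.

```python
# Minimal sketch of volume-rescaled augmentation over a tiny candidate pool.
import itertools
import numpy as np

rng = np.random.default_rng(0)
d, n_pool, n_iid = 2, 12, 20

X_pool = rng.normal(size=(n_pool, d))          # candidate input points
subsets = list(itertools.combinations(range(n_pool), d))
vols2 = np.array([np.linalg.det(X_pool[list(S)]) ** 2 for S in subsets])
probs = vols2 / vols2.sum()

# Draw the d extra points jointly with probability proportional to squared volume.
S = subsets[rng.choice(len(subsets), p=probs)]

# Augment an i.i.d. sample with the volume-sampled points and solve least squares.
X_iid = rng.normal(size=(n_iid, d))
X = np.vstack([X_iid, X_pool[list(S)]])
y = X @ np.array([1.0, -2.0]) + rng.normal(scale=0.1, size=len(X))  # toy responses
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(w_hat)
```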
We examine connections between combinatorial notions that arise in machine learning and topological notions in cubical/simplicial geometry. These connections allow results to be exported from geometry to machine learning. Our first main result is based on a geometric construction by Tracy Hall (2004) of a partial shelling of the cross-polytope that cannot be extended. We use it to derive a maximum class of VC dimension 3 that has no corners. This refutes several earlier works in machine learning from the past 11 years. In particular, it implies that all previous constructions of optimal unlabeled sample compression schemes for maximum classes are erroneous. On the positive side, we present a new construction of an unlabeled sample compression scheme for maximum classes. We leave open whether our unlabeled sample compression scheme extends to ample (a.k.a. lopsided or extremal) classes, which represent a natural and far-reaching generalization of maximum classes. Towards resolving this question, we provide a geometric characterization in terms of unique sink orientations of the 1-skeletons of the associated cubical complexes.
The link with exponential families has allowed $k$-means clustering to be generalized to a wide variety of data generating distributions in exponential families and clustering distortions among Bregman divergences. Getting the framework to work above exponential families is important to lift roadblocks like the lack of robustness of some population minimizers carved in their axiomatization. Current generalisations of exponential families like $q$-exponential families or even deformed exponential families fail at achieving the goal. In this paper, we provide a new attempt at getting the complete framework, grounded in a new generalisation of exponential families that we introduce, tempered exponential measures (TEM). TEMs keep the maximum entropy axiomatization framework of $q$-exponential families, but instead of normalizing the measure, normalize a dual called a co-distribution. Numerous interesting properties arise for clustering, such as improved and controllable robustness for population minimizers, which keep a simple analytic form.
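For reference, the sketch below implements the classical Bregman $k$-means baseline (Lloyd-style, here with the Itakura-Saito divergence on positive data) that the tempered-exponential-measure framework generalizes; the TEM clustering itself, with its co-distribution normalization, is not implemented, and the data are synthetic.

```python
# Minimal sketch of classical Bregman k-means, the baseline generalized by TEMs.
import numpy as np

def itakura_saito(x, c):
    # Bregman divergence generated by F(x) = -sum(log x), for positive vectors.
    r = x / c
    return np.sum(r - np.log(r) - 1.0, axis=-1)

def bregman_kmeans(X, k, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        d = np.stack([itakura_saito(X, c) for c in centers], axis=1)  # (n, k)
        labels = d.argmin(axis=1)
        # For any Bregman divergence the cluster population minimizer is the mean.
        centers = np.stack([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels, centers

X = np.abs(np.random.default_rng(1).normal(loc=5.0, size=(300, 4))) + 0.1
labels, centers = bregman_kmeans(X, k=3)
print(centers)
```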
Designing experiments often requires balancing between learning about the true treatment effects and earning from allocating more samples to the superior treatment. While optimal algorithms for the Multi-Armed Bandit Problem (MABP) provide allocation policies that optimally balance learning and earning, they tend to be computationally expensive. The Gittins Index (GI) is a solution to the MABP that can simultaneously attain the goals of optimality and computational efficiency, and it has recently been used in experiments with Bernoulli and Gaussian rewards. For the first time, we present a modification of the GI rule that can be used in experiments with exponentially distributed rewards. We report its performance in simulated 2-armed and 3-armed experiments. Compared to traditional non-adaptive designs, our novel GI-modified design shows operating characteristics comparable in learning (e.g. statistical power) but substantially better in earning (e.g. direct benefits). This illustrates the potential of designs that use a GI approach to allocate participants to improve participant benefits, increase efficiency, and reduce experimental costs in adaptive multi-armed experiments with exponential rewards.
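A toy adaptive-allocation loop of this kind is sketched below for exponentially distributed rewards. The index used is a simple optimistic quantile of a conjugate Gamma posterior over each arm's rate, a stand-in heuristic rather than the GI modification proposed here; the arm means, prior, and horizon are illustrative.

```python
# Toy index-based adaptive experiment with exponentially distributed rewards.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_means = [1.0, 1.5, 2.0]          # 3-armed experiment, mean rewards
a0, b0 = 1.0, 1.0                     # Gamma(a0, b0) prior on each arm's rate

counts = np.zeros(3)
sums = np.zeros(3)

def index(arm):
    # Optimistic estimate of the mean reward 1/lambda: use a low quantile of
    # the posterior Gamma(a0 + n, b0 + sum of rewards) over the rate lambda.
    a, b = a0 + counts[arm], b0 + sums[arm]
    lam_lo = stats.gamma.ppf(0.05, a, scale=1.0 / b)
    return 1.0 / lam_lo

for t in range(500):
    arm = int(np.argmax([index(k) for k in range(3)]))
    reward = rng.exponential(true_means[arm])
    counts[arm] += 1
    sums[arm] += reward

print("allocation per arm:", counts,
      "empirical means:", sums / np.maximum(counts, 1))
```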
Modelling and forecasting real-life human behaviour using online social media is an active endeavour of interest in politics, government, academia, and industry. Since its creation in 2006, Twitter has been proposed as a potential laboratory that could be used to gauge and predict social behaviour. During the last decade, the user base of Twitter has been growing and becoming more representative of the general population. Here we analyse this user base in the context of the 2021 Mexican Legislative Election. To do so, we use a dataset of 15 million election-related tweets in the six months preceding election day. We explore different election models that assign political preference to either the ruling parties or the opposition. We find that models using data with geographical attributes determine the results of the election with better precision and accuracy than conventional polling methods. These results demonstrate that analysis of public online data can outperform conventional polling methods, and that political analysis and general forecasting would likely benefit from incorporating such data in the immediate future. Moreover, the same Twitter dataset with geographical attributes is positively correlated with results from official census data on population and internet usage in Mexico. These findings suggest that we have reached a period in time when online activity, appropriately curated, can provide an accurate representation of offline behaviour.
Existing federated classification algorithms typically assume the local annotations at every client cover the same set of classes. In this paper, we aim to lift such an assumption and focus on a more general yet practical non-IID setting where every client can work on non-identical and even disjoint sets of classes (i.e., client-exclusive classes), and the clients have a common goal which is to build a global classification model to identify the union of these classes. Such heterogeneity in client class sets poses a new challenge: how to ensure different clients are operating in the same latent space so as to avoid the drift after aggregation? We observe that the classes can be described in natural languages (i.e., class names) and these names are typically safe to share with all parties. Thus, we formulate the classification problem as a matching process between data representations and class representations and break the classification model into a data encoder and a label encoder. We leverage the natural-language class names as the common ground to anchor the class representations in the label encoder. In each iteration, the label encoder updates the class representations and regulates the data representations through matching. We further use the updated class representations at each round to annotate data samples for locally-unaware classes according to similarity and distill knowledge to local models. Extensive experiments on four real-world datasets show that the proposed method can outperform various classical and state-of-the-art federated learning methods designed for learning with non-IID data.
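A minimal sketch of the matching formulation follows: a data encoder and a label encoder over class-name features, trained so that each sample's representation matches its class representation. The encoders, class-name embeddings, dimensions, and temperature are illustrative, and the federated aggregation, annotation, and distillation steps described above are omitted.

```python
# Minimal local sketch of matching data representations to class representations.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DataEncoder(nn.Module):
    def __init__(self, in_dim=32, emb_dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, emb_dim))
    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)

class LabelEncoder(nn.Module):
    def __init__(self, name_dim=8, emb_dim=16):
        super().__init__()
        self.net = nn.Linear(name_dim, emb_dim)
    def forward(self, name_feats):
        return F.normalize(self.net(name_feats), dim=-1)

data_enc, label_enc = DataEncoder(), LabelEncoder()
opt = torch.optim.Adam(list(data_enc.parameters()) + list(label_enc.parameters()), lr=1e-3)

x = torch.randn(64, 32)                    # one client's local batch
y = torch.randint(0, 5, (64,))             # labels over this client's classes
name_feats = torch.randn(5, 8)             # stand-in embeddings of the 5 class names

for _ in range(100):
    # Classification as matching: similarity of data and class representations.
    logits = data_enc(x) @ label_enc(name_feats).T / 0.1
    loss = F.cross_entropy(logits, y)
    opt.zero_grad()
    loss.backward()
    opt.step()
print(float(loss))
```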
This paper concerns smooth function approximation by neural networks (NNs). Mathematical or physical functions can be replaced by NN models through regression. In this study, by examining several topics in regression, we obtain NNs that generate highly accurate and highly smooth functions while comprising only a few weight parameters. First, we reinterpret the inner workings of NNs for regression and, as a consequence, propose a new activation function, the integrated sigmoid linear unit (ISLU). Then, the special characteristics of metadata for regression, which differ from those of other data such as images or sound, are discussed with a view to improving the performance of neural networks. Finally, a simple hierarchical NN that generates models substituting for mathematical functions is presented, and a new batch concept, the ``meta-batch'', which improves the performance of NNs several times over, is introduced. The new activation function, the meta-batch method, the features of numerical data, meta-augmentation with metaparameters, and a structure of NNs that generates a compact multi-layer perceptron (MLP) are essential to this study.
The existing methods for video anomaly detection mostly utilize videos containing identifiable facial and appearance-based features. The use of videos with identifiable faces raises privacy concerns, especially when used in a hospital or community-based setting. Appearance-based features can also be sensitive to pixel-based noise, straining the anomaly detection methods to model the changes in the background and making it difficult to focus on the actions of humans in the foreground. Structural information in the form of skeletons describing the human motion in the videos is privacy-protecting and can overcome some of the problems posed by appearance-based features. In this paper, we present a survey of privacy-protecting deep learning anomaly detection methods using skeletons extracted from videos. We present a novel taxonomy of algorithms based on the various learning approaches. We conclude that skeleton-based approaches for anomaly detection can be a plausible privacy-protecting alternative for video anomaly detection. Lastly, we identify major open research questions and provide guidelines to address them.
The Government of Kerala had increased the frequency of supply of free food kits owing to the pandemic; however, these items were static and not indicative of the personal preferences of the consumers. This paper conducts a comparative analysis of various clustering techniques on a scaled-down version of a real-world dataset obtained through a conjoint-analysis-based survey. Clustering carried out by centroid-based methods such as k-means is analyzed, the results are plotted alongside an SVD-based approach, and a conclusion is reached as to which of the two is better. Once the clusters have been formulated, commodities are decided upon for each cluster. Clustering is further enhanced by reassignment based on a specific cluster-loss threshold, as sketched below. Thus, the most efficacious clustering technique for designing a food kit tailored to the needs of individuals is finally obtained.
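A toy sketch of such a pipeline follows, under one possible reading of the reassignment step; the survey data, number of clusters, and loss threshold are illustrative and not taken from the paper.

```python
# Toy sketch: k-means on scaled survey responses, an SVD view for comparison,
# and a reassignment pass based on a cluster-loss threshold.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
survey = rng.integers(1, 6, size=(200, 10)).astype(float)   # 200 respondents, 10 preference items
X = StandardScaler().fit_transform(survey)

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
labels = km.labels_

# Low-rank SVD view of the same data, e.g. for plotting the clusters in 2-D.
U, S, Vt = np.linalg.svd(X, full_matrices=False)
coords_2d = U[:, :2] * S[:2]

# Reassignment (one possible reading): points whose within-cluster distance
# exceeds a loss threshold are held out, centroids are re-fit on the rest,
# and every point is then assigned to the nearest new centroid.
dists = np.linalg.norm(X - km.cluster_centers_[labels], axis=1)
threshold = np.percentile(dists, 90)
keep = dists <= threshold
km2 = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X[keep])
labels = km2.predict(X)
print(np.bincount(labels))
```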